14,872 research outputs found

    Heavy QQ(bar) "Fireball" Annihilation to Multiple Vector Bosons

    Drawing an analogy that replaces the nucleon by a heavy chiral quark $Q$, the pion by a Goldstone boson $G$, and the $\pi NN$ coupling by a $GQQ$ coupling, we construct a statistical model for $Q\bar Q \to nG$ annihilation, i.e. into $n$ longitudinal weak bosons. This analogy is becoming prescient since the LHC direct bound $m_Q > 611$ GeV implies strong Yukawa coupling. Taking $m_Q \in (1, 2)$ TeV, the mean number ranges from 6 to over 10, with negligible two or three boson production. With individual $t'$ or $b'$ decays suppressed either by phase space or quark mixing, and given the strong Yukawa coupling, $Q\bar Q \to nV_L$ is the likely outcome for very heavy $Q\bar Q$ production at the LHC.
    Comment: 4 pages, 1 figure

    A statistical framework for the design of microarray experiments and effective detection of differential gene expression

    Four reasons why you might wish to read this paper: 1. We have devised a new statistical T-test to determine differentially expressed genes (DEGs) in the context of microarray experiments. This statistical test adds a new member to the traditional T-test family. 2. An exact formula for calculating the detection power of this T-test is presented, which can also be fairly easily modified to cover the traditional T-tests. 3. We present an accurate yet computationally very simple method to estimate the fraction of non-DEGs in a set of genes being tested. This method is superior to an existing one which is computationally much more involved. 4. We approach the multiple testing problem from a fresh angle and discuss its relation to the classical Bonferroni procedure and to the FDR (false discovery rate) approach. This is most useful in the analysis of microarray data, where typically several thousand genes are tested simultaneously.
    Comment: 9 pages, 1 table; to appear in Bioinformatics
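    The Bonferroni/FDR contrast mentioned in point 4 can be illustrated with a minimal sketch. The p-values below are hypothetical, not taken from the paper; the point is only that Bonferroni controls the family-wise error rate with the conservative per-test threshold $\alpha/m$, while the Benjamini-Hochberg FDR procedure uses the rank-dependent thresholds $k\alpha/m$ and so typically rejects at least as many hypotheses:

```python
# Hypothetical p-values: a few strong signals among several nulls.
pvals = [0.0001, 0.0004, 0.0019, 0.02, 0.2, 0.5, 0.7, 0.9]
m = len(pvals)
alpha = 0.05

# Bonferroni: reject H_i if p_i <= alpha / m (controls family-wise error rate).
bonferroni = [p <= alpha / m for p in pvals]

# Benjamini-Hochberg: find the largest rank k with p_(k) <= (k/m) * alpha,
# then reject the k smallest p-values (controls the false discovery rate).
order = sorted(range(m), key=lambda i: pvals[i])
k_max = 0
for rank, i in enumerate(order, start=1):
    if pvals[i] <= rank / m * alpha:
        k_max = rank
bh = [False] * m
for rank, i in enumerate(order, start=1):
    if rank <= k_max:
        bh[i] = True

print(sum(bonferroni), sum(bh))  # prints "3 4": BH rejects one more hypothesis
```

    Here the fourth-smallest p-value (0.02) fails the Bonferroni cutoff 0.05/8 = 0.00625 but passes the BH threshold (4/8)(0.05) = 0.025, which is exactly the extra power the FDR approach buys when thousands of genes are tested at once.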

    Energy Models for One-Carrier Transport in Semiconductor Devices

    Moment models of carrier transport, derived from the Boltzmann equation, made possible the simulation of certain key effects through such realistic assumptions as energy-dependent mobility functions. This type of global dependence permits the observation of velocity overshoot in the vicinity of device junctions, not discerned via classical drift-diffusion models, which are primarily local in nature. It was found that a critical role is played in the hydrodynamic model by the heat conduction term. When ignored, the overshoot is inappropriately damped. When the standard choice of the Wiedemann-Franz law is made for the conductivity, spurious overshoot is observed. Agreement with Monte Carlo simulation in this regime required empirical modification of this law, or nonstandard choices. Simulations of the hydrodynamic model in one and two dimensions, as well as simulations of a newly developed energy model, the RT model, are presented. The RT model, intermediate between the hydrodynamic and drift-diffusion models, was developed to eliminate the parabolic energy band and Maxwellian distribution assumptions, and to reduce the spurious overshoot with physically consistent assumptions. The algorithms employed for both models are essentially non-oscillatory (ENO) shock-capturing algorithms. Some mathematical results are presented and contrasted with the highly developed state of the drift-diffusion model.

    A Monte Carlo Evaluation of the Efficiency of the PCSE Estimator

    Panel data characterized by groupwise heteroscedasticity, cross-sectional correlation, and AR(1) serial correlation pose problems for econometric analyses. It is well known that the asymptotically efficient FGLS estimator (Parks) sometimes performs poorly in finite samples. In a widely cited paper, Beck and Katz (1995) claim that their estimator (PCSE) is able to produce more accurate coefficient standard errors without any loss in efficiency in "practical research situations." This study disputes that claim. We find that the PCSE estimator is usually less efficient than Parks, often substantially so, except when the number of time periods is close to the number of cross-sections.
    Keywords: panel data estimation; Monte Carlo analysis; FGLS; Parks; PCSE; finite sample
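    The kind of Monte Carlo efficiency comparison the abstract describes can be sketched in miniature. The setup below is a deliberately simplified stand-in (not the paper's design): two groups with known, very different error variances, where the precision-weighted mean plays the role of the GLS-type estimator and the plain average the role of the unweighted one. Repeating the experiment many times and comparing the sampling variances of the two estimators is exactly the efficiency measurement at issue:

```python
import random
import statistics

random.seed(0)

# Hypothetical design: estimate a common mean MU from two groups with
# groupwise heteroscedasticity (very different error standard deviations).
MU = 1.0
SIGMAS = [0.5, 3.0]      # group std deviations, assumed known to the GLS analog
N_PER_GROUP = 20
REPS = 2000

def simulate_once():
    data = [(random.gauss(MU, s), s) for s in SIGMAS for _ in range(N_PER_GROUP)]
    # Unweighted mean: the OLS analog, ignores the variance structure.
    ols = statistics.fmean(y for y, _ in data)
    # Precision-weighted mean: the GLS analog, weights by 1/sigma^2.
    weights = [1.0 / s**2 for _, s in data]
    gls = sum(y / s**2 for y, s in data) / sum(weights)
    return ols, gls

ols_draws, gls_draws = zip(*(simulate_once() for _ in range(REPS)))
var_ols = statistics.pvariance(ols_draws)
var_gls = statistics.pvariance(gls_draws)
print(var_gls < var_ols)  # prints "True": the weighted estimator is more efficient
```

    The paper's finding is the finite-sample flip side of this textbook picture: when the weights (or the full error covariance) must themselves be estimated from short panels, the feasible GLS advantage can erode, and the relative efficiency of Parks versus PCSE becomes an empirical question.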

    A Revisit to Top Quark Forward-Backward Asymmetry

    We analyze various models for the top quark forward-backward asymmetry ($A^t_{FB}$) at the Tevatron, using the latest CDF measurements on different $A^t_{FB}$s and the total cross section. The axigluon model in Ref. \cite{paul} has difficulties in explaining the large rapidity-dependent asymmetry and mass-dependent asymmetry simultaneously, and the parameter space relevant to $A^t_{FB}$ is ruled out by the latest dijet search at ATLAS. In contrast to Ref. \cite{cp}, we demonstrate that the large parameter space in this model with a $U(1)_d$ flavor symmetry is not ruled out by flavor physics. The $t$-channel flavor-violating $Z'$ \cite{hitoshi}, $W'$ \cite{waiyee} and diquark \cite{tim} models all have parameter regions that satisfy the different $A_{FB}$ measurements within $1\sigma$. However, the heavy $Z'$ model, which can be marginally consistent with the total cross section, is severely constrained by the Tevatron direct search for same-sign top quark pairs. The diquark model suffers from too large a total cross section and has difficulty fitting the $t\bar{t}$ invariant mass distribution. The electroweak precision constraint on the $W'$ model based on $Z'$-$Z$ mixing is estimated, and the result is rather weak ($m_{Z'} > 450$ GeV). Therefore, the heavy $W'$ model seems to give the best fit for all the measurements. The $W'$ model predicts a $t\bar{t}+j$ signal from $tW'$ production at 10%-50% of the SM $t\bar{t}$ rate at the 7 TeV LHC. Such a $t+j$ resonance can serve as a direct test of the $W'$ model.
    Comment: 25 pages, 7 figures, 1 table